Learning Grasp Affordances with Variable Tool Point Offsets
Authors
Abstract
When grasping an object, a robot must identify the available forms of interaction with that object. Each of these forms of interaction, a grasp affordance, describes one canonical option for placing the hand and fingers with respect to the object as an agent prepares to grasp it. The affordance does not represent a single hand posture, but an entire manifold within a space that describes hand position/orientation and finger configuration. Our challenges are 1) to represent this manifold as compactly as possible, and 2) to extract these affordance representations from a set of example grasps demonstrated by a human teacher. In this paper, we approach the representation problem by capturing all instances of a canonical grasp with a joint probability density function (PDF) over a hand posture space. The PDF captures, in an object-centered coordinate frame, a combination of hand orientation, tool point position, and the offset from hand to tool point. The set of canonical grasps is then represented using a mixture distribution model. We address the problem of learning the model parameters from a set of example grasps using a clustering approach based on expectation maximization. Our experiments show that the learned canonical grasps correspond to the functionally different ways in which the object may be grasped. In addition, because the tool point/hand relationship is included in the learned model, the approach can separate different grasp types even when those types involve similar hand postures.
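To make the modeling idea concrete, the following sketch (a simplified illustration, not the authors' implementation) assumes each demonstrated grasp is encoded as a fixed-length feature vector in an object-centered frame (tool point position, hand orientation as a quaternion, and the hand-to-tool-point offset) and fits a Gaussian mixture with expectation maximization, so that each mixture component stands in for one canonical grasp. The feature layout, the Gaussian model class, and the BIC-based choice of component count are illustrative assumptions.

```python
# Minimal sketch, assuming a Gaussian mixture over grasp feature vectors;
# not the paper's exact parameterization of the hand posture space.
import numpy as np
from sklearn.mixture import GaussianMixture

def grasp_feature(tool_point_xyz, hand_quat_wxyz, hand_to_tool_offset_xyz):
    """Concatenate one demonstrated grasp into a single feature vector.

    The exact parameterization (including any handling of the quaternion
    double cover) is an assumption made here for illustration only.
    """
    return np.concatenate([tool_point_xyz, hand_quat_wxyz, hand_to_tool_offset_xyz])

def learn_canonical_grasps(demonstrations, max_components=6):
    """Fit Gaussian mixtures with EM and pick a model size by BIC.

    `demonstrations` is an (N, D) array of grasp feature vectors in an
    object-centered frame. Each component of the returned mixture plays
    the role of one canonical grasp.
    """
    X = np.asarray(demonstrations)
    best_model, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              n_init=5, random_state=0).fit(X)
        bic = gmm.bic(X)
        if bic < best_bic:
            best_model, best_bic = gmm, bic
    return best_model

# Usage (hypothetical data X): labels = learn_canonical_grasps(X).predict(X)
# groups the demonstrations into canonical grasp types.
```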
Similar Resources
Localizing Handle-Like Grasp Affordances in 3D Point Clouds
We propose a new approach to localizing handle-like grasp affordances in 3-D point clouds. The main idea is to identify a set of sufficient geometric conditions for the existence of a grasp affordance and to search the point cloud for neighborhoods that satisfy these conditions. Our goal is not to find all possible grasp affordances, but instead to develop a method of localizing important types...
Learning Continuous Grasp Affordances by Sensorimotor Exploration
We develop means of learning and representing object grasp affordances probabilistically. By grasp affordance, we refer to an entity that is able to assess whether a given relative object-gripper configuration will yield a stable grasp. These affordances are represented with grasp densities, continuous probability density functions defined on the space of 3D positions and orientations. Grasp de...
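For intuition, a grasp density of this kind can be approximated very simply as a kernel density estimate over demonstrated object-relative gripper poses. The sketch below is an illustrative simplification, not the cited paper's method: it uses a Gaussian kernel on position, a kernel on quaternion angular distance, and hand-picked bandwidths.

```python
# Rough sketch, assuming demonstrated gripper poses relative to the object;
# kernels and bandwidths are illustrative choices.
import numpy as np

def quat_angle(q1, q2):
    """Angular distance between two unit quaternions (handles the double cover)."""
    dot = np.clip(abs(np.dot(q1, q2)), -1.0, 1.0)
    return 2.0 * np.arccos(dot)

def grasp_density(query_pos, query_quat, demo_pos, demo_quat,
                  sigma_pos=0.02, sigma_rot=0.2):
    """Unnormalized density of a query gripper pose under demonstrated grasps.

    demo_pos: (N, 3) positions, demo_quat: (N, 4) unit quaternions,
    both expressed in the object frame.
    """
    d_pos = np.linalg.norm(demo_pos - query_pos, axis=1)
    d_rot = np.array([quat_angle(query_quat, q) for q in demo_quat])
    weights = np.exp(-0.5 * (d_pos / sigma_pos) ** 2) * \
              np.exp(-0.5 * (d_rot / sigma_rot) ** 2)
    return weights.mean()
```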
Relational Affordance Learning for Task-Dependent Robot Grasping
Robot grasping depends on the specific manipulation scenario: the object, its properties, task and grasp constraints. Object-task affordances facilitate semantic reasoning about pre-grasp configurations with respect to the intended tasks, favouring good grasps. We employ probabilistic rule learning to recover such object-task affordances for task-dependent grasping from realistic video data.
Learning Objects and Grasp Affordances through Autonomous Exploration
We describe a system for autonomous learning of visual object representations and their grasp affordances on a robot-vision system. It segments objects by grasping and moving 3D scene features, and creates probabilistic visual representations for object detection, recognition and pose estimation, which are then augmented by continuous characterizations of grasp affordances generated through bia...
Localizing Grasp Affordances in 3-D Points Clouds Using Taubin Quadric Fitting
Perception-for-grasping is a challenging problem in robotics. Inexpensive range sensors such as the Microsoft Kinect provide sensing capabilities that have given new life to the effort of developing robust and accurate perception methods for robot grasping. This paper proposes a new approach to localizing enveloping grasp affordances in 3-D point clouds efficiently. The approach is based on mod...
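Taubin's estimator fits an implicit quadric c·φ(x) = 0 by minimizing the algebraic error normalized by the gradient magnitude, which reduces to a generalized eigenvalue problem. The sketch below shows only this fitting step, not the paper's full grasp-localization pipeline; the monomial ordering and the handling of degenerate eigenvalues are illustrative choices.

```python
# Compact sketch of Taubin quadric fitting on a point-cloud neighborhood.
import numpy as np
from scipy.linalg import eig

def taubin_quadric_fit(points):
    """Fit a quadric surface to an (N, 3) array of points.

    Returns the 10 coefficients c of
    c0*x^2 + c1*y^2 + c2*z^2 + c3*x*y + c4*x*z + c5*y*z + c6*x + c7*y + c8*z + c9 = 0.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ones, zeros = np.ones_like(x), np.zeros_like(x)

    # Monomial basis phi(x) and its partial derivatives.
    phi = np.stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, ones], axis=1)
    dphi_dx = np.stack([2*x, zeros, zeros, y, z, zeros, ones, zeros, zeros, zeros], axis=1)
    dphi_dy = np.stack([zeros, 2*y, zeros, x, zeros, z, zeros, ones, zeros, zeros], axis=1)
    dphi_dz = np.stack([zeros, zeros, 2*z, zeros, x, y, zeros, zeros, ones, zeros], axis=1)

    M = phi.T @ phi                                                       # algebraic error
    N = dphi_dx.T @ dphi_dx + dphi_dy.T @ dphi_dy + dphi_dz.T @ dphi_dz   # gradient norm

    # Taubin estimate: generalized eigenvector M c = lambda N c with the
    # smallest finite eigenvalue (N is singular, so infinite eigenvalues occur).
    eigvals, eigvecs = eig(M, N)
    eigvals = np.real(eigvals)
    finite = np.isfinite(eigvals)
    best = np.where(finite)[0][np.argmin(eigvals[finite])]
    return np.real(eigvecs[:, best])
```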